8 research outputs found

    A Federated Learning Framework for Stenosis Detection

    This study explores the use of Federated Learning (FL) for stenosis detection in coronary angiography (CA) images. Two heterogeneous datasets from two institutions were considered: Dataset 1 includes 1219 images from 200 patients, which we acquired at the Ospedale Riuniti of Ancona (Italy); Dataset 2 includes 7492 sequential images from 90 patients from a previous study available in the literature. Stenosis detection was performed using a Faster R-CNN model. In our FL framework, only the weights of the model backbone were shared between the two client institutions, using Federated Averaging (FedAvg) for weight aggregation. We assessed stenosis-detection performance using Precision (Prec), Recall (Rec), and F1 score (F1). Our results showed that the FL framework does not substantially affect client 2's performance, which was already good with local training; for client 1, instead, the FL framework improves Prec, Rec, and F1 over the local model by +3.76%, +17.21%, and +10.80%, respectively, reaching Prec = 73.56, Rec = 67.01, and F1 = 70.13. With these results, we showed that FL may enable multicentric studies relevant to automatic stenosis detection in CA by addressing data heterogeneity across institutions while preserving patient privacy.
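    The FedAvg weight aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tensor name and the weighting of each client by its local dataset size are assumptions (the dataset sizes below are taken from the abstract).

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging: weighted mean of each shared backbone
    tensor across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two hypothetical clients sharing one backbone tensor
w1 = {"conv1.weight": np.ones((2, 2))}
w2 = {"conv1.weight": np.zeros((2, 2))}
agg = fedavg([w1, w2], client_sizes=[1219, 7492])
```

    The server would broadcast `agg` back to both clients before the next round of local training; per-client heads (the non-backbone layers) stay local, matching the backbone-only sharing described in the abstract.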

    A review on deep-learning algorithms for fetal ultrasound-image analysis

    Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. Despite the large number of survey papers already present in this field, most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 145 research papers published after 2017. Each paper is analyzed and commented on from both the methodology and application perspectives. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis, and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches. Publicly available datasets and the performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art of DL algorithms for fetal US image analysis and a discussion of the current challenges that researchers working in the field must tackle to translate the research methodology into actual clinical practice.

    Quantification of fetal ST-segment deviations

    Fetal electrocardiogram (FECG) analysis has shown that changes in the ST segment are associated with acid-base status, and thus with fetal health. Currently, the most popular estimation of fetal ST-segment deviations is computed as the ratio between T-wave height and QRS-complex amplitude using the STAN monitor. This evaluation is thus indirect, because it is not derived from measurements on the ST segment itself. This study proposes a new procedure for automated, direct quantification of fetal ST-segment deviations, described in terms of ST-amplitude and ST-trend. Specifically, ST-amplitude corresponds to the maximum of the mean amplitude values obtained through a moving-average (15 ms) operation over the ST segment. ST-trend, instead, corresponds to the difference between the ST-segment amplitudes calculated in the first and the last of three intervals into which the ST segment is divided; thus, the ST-trend sign indicates ST-segment elevation (positive) or depression (negative). The procedure was evaluated on five direct FECG recordings (available at https://physionet.org/physiobank/database/adfecgdb/). Mean values (over the population) of ST-amplitude and ST-trend were 9.6 ± 5.5 µV and 1.4 ± 2.3 µV, respectively. All found values were validated by visual inspection of the magnified FECG plots.
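    The two descriptors above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the sampling frequency is an assumed parameter, and the sign convention (last third minus first third, so that elevation is positive) is inferred from the stated interpretation of the ST-trend sign.

```python
import numpy as np

def moving_average(x, win):
    """Simple moving average over a 1-D signal."""
    return np.convolve(x, np.ones(win) / win, mode="valid")

def st_metrics(st_segment, fs=1000, win_ms=15):
    """ST-amplitude and ST-trend of one ST segment.

    st_segment : 1-D array of FECG samples over the ST segment (uV)
    fs         : sampling frequency in Hz (assumed; not in the abstract)
    """
    win = max(1, int(round(win_ms * fs / 1000)))
    # ST-amplitude: maximum of the 15-ms moving-average values
    st_amplitude = moving_average(st_segment, win).max()
    # ST-trend: mean amplitude of the last third minus the first third
    # (assumed sign convention: elevation -> positive trend)
    thirds = np.array_split(st_segment, 3)
    st_trend = thirds[-1].mean() - thirds[0].mean()
    return st_amplitude, st_trend
```

    On a flat segment the trend is zero; on a rising segment it is positive, matching the elevation/depression interpretation in the abstract.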

    A deep learning approach to median nerve evaluation in ultrasound images of carpal tunnel inlet

    Ultrasound (US) imaging is recognized as a useful support for Carpal Tunnel Syndrome (CTS) assessment through the evaluation of median-nerve morphology. However, US is still far from being systematically adopted to evaluate this common entrapment neuropathy, due to intrinsic US challenges such as operator dependency and the lack of standard protocols. To support sonographers, the present study proposes a fully automatic deep-learning approach to median-nerve segmentation from US images. We collected and annotated a dataset of 246 images acquired in clinical practice from 103 rheumatic patients, regardless of anatomical variants (bifid nerve, closed vessels). We developed a Mask R-CNN with two additional transposed layers at the segmentation head to accurately segment the median nerve directly on transverse US images. We calculated the cross-sectional area (CSA) of the predicted median nerve. The proposed model achieved good performance in both median-nerve detection and segmentation: Precision (Prec), Recall (Rec), mean Average Precision (mAP), and Dice Similarity Coefficient (DSC) values were 0.916 ± 0.245, 0.938 ± 0.233, 0.936 ± 0.235, and 0.868 ± 0.201, respectively. The CSA values measured on true-positive predictions were comparable with the sonographer's manual measurements, with a mean absolute error (MAE) of 0.918 mm². Experimental results showed the potential of the proposed model, which identified and segmented the median-nerve section in normal-anatomy images, while still struggling with infrequent anatomical variants. Future research will expand the dataset to include a wider spectrum of normal anatomy and pathology to support sonographers in daily practice.
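    The CSA of a predicted mask is typically obtained by counting foreground pixels and scaling by the physical pixel area. A minimal sketch, assuming an isotropic pixel spacing that the abstract does not specify:

```python
import numpy as np

def cross_sectional_area(mask, pixel_spacing_mm=(0.1, 0.1)):
    """Cross-sectional area (CSA) of a binary segmentation mask, in mm^2.

    mask             : 2-D boolean/0-1 array from the segmentation model
    pixel_spacing_mm : (row, col) physical size of one pixel (assumed value;
                       real US acquisitions provide this as calibration data)
    """
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return int(np.count_nonzero(mask)) * pixel_area

# A hypothetical 10x10-pixel nerve section at 0.1 mm/pixel
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
csa = cross_sectional_area(mask)  # 100 pixels * 0.01 mm^2 = 1.0 mm^2
```

    Comparing such automatic CSA values against the sonographer's manual tracings is what yields the mean absolute error reported above.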

    Learning-Based Median Nerve Segmentation From Ultrasound Images For Carpal Tunnel Syndrome Evaluation

    Carpal tunnel syndrome (CTS) is the most common entrapment neuropathy. Ultrasound (US) imaging may help diagnose and assess CTS through the evaluation of median-nerve morphology. To support sonographers, this paper proposes a fully-automatic deep-learning approach to median-nerve segmentation from US images. The approach relies on Mask R-CNN, a convolutional neural network that is trained end-to-end. The segmentation head of Mask R-CNN is evaluated here in three different configurations, with the goal of studying the effect of the segmentation-head output resolution on the overall Mask R-CNN segmentation performance. For this study, we collected and annotated a dataset of 151 images acquired in actual clinical practice from 53 subjects with CTS. To our knowledge, this is the largest dataset in the field in terms of subjects. We achieved a median Dice similarity coefficient of 0.931 (IQR = 0.027), demonstrating the potential of the proposed approach. These results are a promising step towards providing an effective tool for CTS assessment in actual clinical practice.
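    The reported metrics, a per-image Dice coefficient summarized by median and interquartile range, can be computed as below. This is a generic sketch of the standard definitions, not the authors' evaluation code; the function names are illustrative.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def median_iqr(scores):
    """Median and interquartile range over per-image scores,
    the summary statistics reported in the abstract."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return med, q3 - q1
```

    Median and IQR are preferred over mean and standard deviation when the per-image score distribution is skewed, e.g. by a few hard images with very low Dice.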

    Development of a convolutional neural network for the identification and the measurement of the median nerve on ultrasound images acquired at carpal tunnel level

    Deep learning applied to ultrasound (US) can provide feedback to the sonographer about the correct identification of scanned tissues and allows for faster and standardized measurements. The most frequently adopted parameter for US diagnosis of carpal tunnel syndrome is the increase in the cross-sectional area (CSA) of the median nerve. Our aim was to develop a deep-learning algorithm, relying on convolutional neural networks (CNNs), for the localization and segmentation of the median nerve and the automatic measurement of its CSA on US images acquired at the proximal inlet of the carpal tunnel.
    Smerilli, Gianluca; Cipolletta, Edoardo; Sartini, Gianmarco; Moscioni, Erica; Di Cosmo, Mariachiara; Fiorentino, Maria Chiara; Moccia, Sara; Frontoni, Emanuele; Grassi, Walter; Filippucci, Emilio